Differential Equation
Table of Contents
- 1. Ordinary Differential Equations
- 1.1. Elementary First Order
- 1.2. First Order Linear Inhomogeneous
- 1.3. First Order Nonlinear
- 1.4. Special First Order
- 1.5. With Constant Coefficients
- 1.6. Autonomous
- 1.7. Linear Homogeneous
- 1.8. Linear Inhomogeneous
- 1.9. Special Second Order
- 1.9.1. Sturm-Liouville Theory
- 1.9.1.1. Regular Sturm-Liouville Problem
- 1.9.1.2. Time-Independent Schrödinger Equation
- 1.9.1.3. Chebyshev Differential Equations
- 1.9.1.4. Legendre's Differential Equation
- 1.9.1.5. Laguerre's Differential Equation
- 1.9.1.6. (Physicist's) Hermite's Differential Equation
- 1.9.1.7. Bessel's Differential Equation
- 1.9.2. Riemann's Differential Equation
- 1.10. Special Higher Order
- 1.11. Nonlinear
- 1.12. First Order Coupled Nonlinear
- 1.13. Properties
- 1.14. Singularity
- 2. Partial Differential Equations
- 3. Green's Function
- 4. Laplace Transformation
- 5. Hamiltonian Flow
- 6. Matrix Differential Equation
- 7. Wronskian
- 8. Liouville's Formula
- 9. Boundary Conditions
- 10. Existence and Uniqueness
- 11. Stochastic Differential Equation
- 12. External Links
- 13. References
1. Ordinary Differential Equations
1.1. Elementary First Order
- The general formula of the solution is known.
1.1.1. Separable
- \[ P(x)\,dx + Q(y)\,dy = 0 \]
- Separate the variables and integrate separately.
1.1.2. Exact
- \[ P(x,y)\,dx + Q(x,y)\,dy = 0 \]
- Such that the left-hand side equals \(d\phi\); that is, \(\exists \phi\) such that:
- \[ \frac{\partial \phi}{\partial x} = P(x,y) \]
- \[ \frac{\partial \phi}{\partial y} = Q(x,y) \]
- Perform the line integral from \((x_0, y_0)\) to \((x,y)\).
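- A minimal sympy sketch of this recipe (the choices of \(P\), \(Q\), and the base point \((x_0, y_0)\) are illustrative): test exactness via \(P_y = Q_x\), then build \(\phi\) by integrating along the path \((x_0,y_0)\to(x,y_0)\to(x,y)\).

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
P = 2*x*y + 1          # coefficient of dx (illustrative)
Q = x**2 + 3*y**2      # coefficient of dy (illustrative)
assert sp.simplify(sp.diff(P, y) - sp.diff(Q, x)) == 0  # exactness test

x0, y0 = 0, 0          # base point of the line integral
phi = (sp.integrate(P.subs(y, y0).subs(x, t), (t, x0, x))
       + sp.integrate(Q.subs(y, t), (t, y0, y)))
print(phi)             # x + x**2*y + y**3; the solutions are phi(x, y) = C
```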
1.1.3. Homogeneous in x and y
- \[ P(x,y)\,dx + Q(x,y)\,dy = 0 \]
- with \(P(x,y)\) and \(Q(x,y)\) both homogeneous polynomials of the same order \(k\).
- That is to say, each term is of order \(k+1\), counting the differentials.
- Introduce the degree-0 parameter \(v\): \(y = vx\), to transform the equation into a separable differential equation.
- Homogeneity makes the equation invariant under the scaling \((x, y) \rightsquigarrow (\lambda x, \lambda y)\).
- This makes the division by \(x^k\) possible, simplifying the equation.
1.1.3.1. Isobaric
- Determine the order of \(y\) by making the equation homogeneous, and substitute \(y = vx^r\).
1.2. First Order Linear Inhomogeneous
- \[ y' + p(x)y = q(x) \]
- Transform the equation into an exact differential equation by
multiplying both sides by the integrating factor \(\alpha\):
\[
\alpha = \exp\left(\int p(x)\,dx\right).
\]
- \[ \alpha(x)\,dy + \alpha(x)(p(x)y - q(x))\,dx = 0\quad\text{exact} \]
- Taking any one antiderivative is fine.
1.2.1. General Solution
- \[ y = \frac{1}{\alpha} \left(\int \alpha q\,dx\right) + \frac{C}{\alpha} \]
- The first term is the particular solution and the second term is the complementary solution.
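- A hedged sympy sketch of the general-solution formula above (the choices \(p = 1/x\), \(q = x\) are illustrative):

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
p, q = 1/x, x                              # illustrative coefficients
alpha = sp.exp(sp.integrate(p, x))         # integrating factor
y = (sp.integrate(alpha*q, x) + C)/alpha   # particular + complementary
print(sp.simplify(y.diff(x) + p*y - q))    # 0: the formula satisfies the ODE
print(y)                                   # C/x + x**2/3
```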
1.2.2. Operator Analysis
- \[ (D + F)y = q \] where \(F\) is the operator of multiplication by \(p(x)\).
- Find a multiplication operator \(M\), multiplication by \(\mu(x)\),
such that \(M^*\), the operator of multiplication by the
logarithmic derivative of \(\mu(x)\), equals \(F\):
- \[ M^*y = \frac{\mu'(x)}{\mu(x)} y = p(x)y = Fy \]
- \[ M^{-1}DM y = q \]
- The solution is directly given by: \[ y = M^{-1}D^{-1}M q. \]
1.3. First Order Nonlinear
1.3.1. Method of Parameter Introduction
1.3.1.1. p Introduction
- Most commonly, introduce \(p\) in the place of \(y'\) as an independent variable, and express the solution in terms of \(p\).
- Clairaut's equation
- D'Alembert's equation
1.3.1.2. z Introduction
- Introduce \(z\) such that \(y = f(x, z)\).
- Chrystal's equation
1.4. Special First Order
1.4.1. Bernoulli Differential Equation
- \[ y' + P(x)y = Q(x)y^n \]
- The substitution \(u = y/y^n\) reduces the equation to a linear
differential equation.
- \[ u = y^{1-n} = \frac{y}{y^n} \]
- \[ \frac{du}{dx} = (1-n)\frac{y'}{y^n} \]
- \[ u' + (1-n)P(x)u = (1-n)Q(x) \]
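- A sketch checking the substitution on the illustrative equation \(y' + y = y^3\) (so \(P = Q = 1\), \(n = 3\)); sympy's `dsolve` also recognizes the Bernoulli form directly.

```python
import sympy as sp

x = sp.symbols('x')
y, u = sp.Function('y'), sp.Function('u')
n = 3

print(sp.dsolve(sp.Eq(y(x).diff(x) + y(x), y(x)**n)))    # direct solve

# the substitution u = y**(1-n) gives u' + (1-n)u = (1-n), i.e. u' - 2u = -2
print(sp.dsolve(sp.Eq(u(x).diff(x) + (1-n)*u(x), 1-n)))  # u(x) = C1*exp(2*x) + 1
```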
1.4.2. Clairaut's Equation
1.4.2.1. Equation
- \[ y = xy' + f(y') \] where \(f\) is continuously differentiable.
- It is a particular case of the Lagrange differential equation.
1.4.2.2. Solution
- Differentiate both sides: \[ y' = y' + xy'' + f'(y')y'' \implies (x + f'(y'))y'' = 0 \]
- Hence, \[ y'' = 0\lor x + f'(y') = 0. \]
1.4.2.2.1. General Solution
- From the first equality, one can obtain the general solution of Clairaut's equation: \[ y' = C \implies y = Cx + f(C). \]
1.4.2.2.2. Singular Solution
- The second equality yields a single solution, called the singular solution, which is the envelope of the family of general solutions.
The singular solution can be parameterized by setting \(y' = p\) as the independent variable:
\begin{cases} x &= -f'(p),\\ y &= f(p) - pf'(p) = xp + f(p).\\ \end{cases}
1.4.2.3. Generalization
- \[ u = xu_x + yu_y + f(u_x, u_y). \]
1.4.2.3.1. Solution
- General Solution
- One of them is:
- \[ u = C_1x + C_2y + f(C_1,C_2) \]
- Singular Solution
- Introduce independent variables \(p := u_x\) and \(q := u_y\).
- Differentiate both sides once with respect to \(p\) and once with respect to \(q\):
- \[ x+\frac{\partial f}{\partial p} = 0,\quad y +\frac{\partial f}{\partial q} = 0 \]
- The parametric solution is:
- \[ \begin{cases} x &= -\partial_p f(p,q),\\ y &= -\partial_q f(p,q),\\ u &= f(p,q) - p\,\partial_p f(p,q) - q\,\partial_q f(p,q)\\ \end{cases} \]
1.4.2.4. Examples
1.4.2.4.1. Parabola
- \[
xy' + \frac{1}{y'} = y
\]
- If \(y''=0\): \[ y = ax + \frac{1}{a}. \]
- If \(y''\neq 0\): \[ y = \pm 2\sqrt{x}. \]
- an exceedingly interesting differential equation - YouTube
1.4.2.4.2. Circle
- \[ y = xy' + \sqrt{y'^2 + 1} \]
- \[ y = Cx + \sqrt{C^2+1} \]
1.4.3. D'Alembert's Equation
1.4.3.1. Equation
- \[ y = f(y')x+g(y') \]
1.4.3.2. Solution
- By the method of parameter introduction.
Differentiate both sides with respect to \(p = y'\), using \(dy = p\,dx\):
\begin{align*} &p\frac{dx}{dp} = f'(p)x + f(p)\frac{dx}{dp} + g'(p)\\ \implies &[p-f(p)]\frac{dx}{dp} - f'(p)x = g'(p). \end{align*}
The solution to this linear first order differential equation \(x = \xi(p; C)\) yields the solution:
\begin{cases} x &= \xi(p; C),\\ y &= f(p)\xi(p; C) + g(p). \\ \end{cases}
1.4.3.3. Lagrange Equation
- \[ F(y')x+ G(y')y = H(y') \]
1.4.4. Chrystal's Equation
1.4.4.1. Equation
- \[ y'^2 + Axy' + By + Cx^2 = 0 \] where \(A, B, C\) are constants.
1.4.4.2. Solution
- Transform by introduction of parameter, \(4By \rightsquigarrow (A^2 - 4C - z^2)x^2\): \[ xz\frac{dz}{dx} = A^2 + AB - 4C \pm Bz - z^2. \]
1.5. With Constant Coefficients
1.5.1. Homogeneous Solutions
- Trial Solution \(e^{rx}\).
1.5.2. Particular Solution
- If the inhomogeneous term is an oscillation with angular frequency \(\omega\), use the trial particular solution \(ze^{i\omega x}\) with complex \(z\), in other words \(Ae^{i(\omega x + \phi)}\) or \(A\cos(\omega x + \phi)\).
1.5.3. Operator Analysis
- The solutions \(\{r_i\}\) to the characteristic equation factorize the differential operator: \[ \left[\prod_{i} (D - r_i)\right]y = 0. \]
- If there's no multiplicity, the solution is given by the linear
combination of the solutions of each factor
- \[ (D - r_i)y_i = 0. \]
- If any multiplicity is present, the solution contains special
solutions:
\[
(D-r_i)^m y_{i} = 0
\]
where \(m\) is the multiplicity of \(r_i\).
- The solution is: \[ y_i = \left(\sum_{k=0}^{m-1}A_kx^k\right)e^{r_ix} \] with \(A_k\) being arbitrary.
- Linear differential equation - Wikipedia
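- A sketch of the repeated-root case, assuming the illustrative equation \(y''' - 3y'' + 3y' - y = 0\), whose characteristic polynomial \((r-1)^3\) has a triple root:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')

eq = sp.Eq(y(x).diff(x, 3) - 3*y(x).diff(x, 2) + 3*y(x).diff(x) - y(x), 0)
print(sp.dsolve(eq))        # y(x) = (C1 + C2*x + C3*x**2)*exp(x)

# the multiplicity is visible in the characteristic polynomial:
print(sp.roots(r**3 - 3*r**2 + 3*r - 1, r))   # {1: 3}
```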
1.6. Autonomous
1.6.1. Definition
- Differential equation without explicit dependence on the independent variable.
- When the independent variable is time, it is called time-invariant system.
1.6.2. Second Order Autonomous
- \[ \ddot{x} = f(x, \dot{x}). \]
- Using the method of parameter introduction: \[ v = \dot{x}. \]
- The equation becomes first order ordinary differential equation: \[ v\frac{dv}{dx} = f(x, v). \]
1.6.3. Higher Order Autonomous
- \[ \frac{d}{dt} \equiv v\frac{d}{dx} \]
- \[
\frac{d^3x}{dt^3} = v\left(\frac{dv}{dx}\right)^2 + v^2\frac{d^2v}{dx^2}
\]
- Unfortunately, this introduces nonlinearity.
1.6.4. Jacobi Elliptic Function
- Jacobi elliptic functions
- \[
y'' + (1+m)y - 2my^3 = 0,\quad y'^2 = (1-y^2)(1-my^2)
\]
- \(\operatorname{sn}(x)\) solves this.
- \[
y'' + (1-2m)y + 2my^3 = 0,\quad y'^2 = (1-y^2)(1-m+my^2)
\]
- \(\operatorname{cn}(x)\) solves this.
- \[
y'' - (2-m)y + 2y^3 = 0,\quad y'^2 = (y^2-1)(1-m-y^2)
\]
- \(\operatorname{dn}(x)\) solves this.
1.6.5. Jacobi Amplitude
- Jacobi amplitude
- \[
\ddot{\theta} + c\sin\theta = 0
\]
- The solution is \[ \theta = 2\operatorname{am}\left(\frac{t\sqrt{2c}}{2}, 2\right). \]
1.6.6. Examples
- \[
yy'' + y^2 = \frac{1}{2}y'^2
\]
- \[ y = C_1\left(\sin\left(C_2 \pm \frac{x}{\sqrt{2}}\right) + 1\right) \]
- Here's a cool non linear differential equation - YouTube
- \[
\frac{y''}{(y')^2} = \frac{y}{y^2-1}
\]
- \[ y = \cosh(C_1x+C_2) \]
- Non-linear differential equations have strange solutions! - YouTube
- \[
y'' + e^y = 0
\]
- \[ y = \ln\left(\frac{2C_1^2C_2e^{C_1x}}{(1+C_2e^{C_1x})^2}\right) \]
- The integration is done with substitution \(v = \sqrt{C_1^2 - 2u}\) \[ \int \frac{du}{u\sqrt{C_1^2 - 2u}} = \int \frac{2dv}{v^2-C_1^2}. \]
- A deceivingly difficult differential equation. - YouTube
1.7. Linear Homogeneous
1.7.1. Solution
1.7.1.1. Reduction of Order
- D'Alembert Reduction
- The last linearly independent solution can be found using reduction of order.
- For a second order linear homogeneous ordinary differential equation with one of the solutions \(y_1\) known, the other solution can be found by substituting \(y = vy_1\) and solving for \(v\).
1.7.1.2. Using the Wronskian
- The last linearly independent solution can also be found using Wronskian.
- The Wronskian yields an order-reduced inhomogeneous differential equation with respect to the last solution \(y\): expanding the Wronskian determinant along the column of \(y\) gives \[ \sum_{i=1}^n C_{i,n}(x)\,y^{(i-1)} = W(x) \] where \(C_{i,n}\) denotes the \((i,n)\) cofactor of the Wronskian matrix and \(W(x)\) is known from Abel's identity,
- and the particular solution of this reduced equation is the last independent solution.
1.7.2. Initial Value Problem
- IVP
- For a linear differential equation with known linearly independent solutions, if the \(n\) initial values \(y(x_0), y'(x_0), \dots, y^{(n-1)}(x_0)\) are given, the coefficients \(C_i\) of the homogeneous solution \(y = \sum C_i y_i\) can be determined by inverting the Wronskian matrix evaluated at \(x_0\):
- \[ \mathbf{C} = \mathbf{W}^{-1}(x_0)\mathbf{y}_0 \]
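- A numeric sketch of \(\mathbf{C} = \mathbf{W}^{-1}(x_0)\mathbf{y}_0\), assuming \(y'' + y = 0\) with \(y_1 = \cos x\), \(y_2 = \sin x\) and the illustrative initial values \(y(0) = 2\), \(y'(0) = 3\):

```python
import numpy as np

x0 = 0.0
W = np.array([[np.cos(x0),  np.sin(x0)],    # row of (y1,  y2)  at x0
              [-np.sin(x0), np.cos(x0)]])   # row of (y1', y2') at x0
y0 = np.array([2.0, 3.0])                   # (y(x0), y'(x0))
C = np.linalg.solve(W, y0)                  # solve instead of forming W^{-1}
print(C)                                    # [2. 3.] -> y = 2 cos x + 3 sin x
```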
1.7.3. Series Solution
1.7.3.1. Power Series Method
- Given that the coefficients of a monic differential equation are
analytic at zero, the solution takes the form of a power series:
\[
f = \sum_{k=0}^\infty A_kz^k.
\]
- If zero is a regular singular point, the Frobenius method is to be applied instead.
- After substitution, the coefficients of the resulting power series are required to be zero, yielding the recurrence relation.
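- A sketch of the recurrence-relation step, assuming the illustrative equation \(y'' + y = 0\): substituting the series gives \(A_{k+2} = -A_k/((k+1)(k+2))\), and iterating from \(A_0 = 1\), \(A_1 = 0\) reproduces the Taylor coefficients of \(\cos\).

```python
from math import cos

A = [1.0, 0.0]                       # A_0, A_1 fixed by y(0) and y'(0)
for k in range(30):                  # A_{k+2} = -A_k / ((k+1)(k+2))
    A.append(-A[k] / ((k + 1) * (k + 2)))

x = 0.7
print(sum(a * x**k for k, a in enumerate(A)), cos(x))  # both ~0.76484
```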
1.7.3.1.1. Taylor Series
- Exponential Power Series
- \(f\) may be taken to be a Taylor series instead, making the recurrence relation simpler: \[ f = \sum_{k=0}^\infty\frac{A_kz^k}{k!}. \]
1.7.3.2. Frobenius Method
- Method of Frobenius
- Method of finding the series solution at the regular singular point of a linear homogeneous differential equation.
- Assume the solution has the form of a Frobenius series: \[ u(z) = z^r\sum_{k=0}^{\infty}A_kz^k \] with \(A_0\neq 0\).
1.7.3.2.1. Indicial Polynomial
- The coefficient of the lowest power of \(z\) after substituting the series, with the common factor \(A_0\) divided out.
- For an equation of the form \(z^2u'' + zp(z)u' + q(z)u = 0\) with \(p, q\) analytic at zero, and the Frobenius series above, the indicial polynomial is: \[ I(r) := r(r-1) +p(0)r + q(0). \]
1.7.3.2.2. Indicial Equation
- \(I(r) = 0\)
- This yields the value of \(r\).
1.7.3.3. Parker-Sochacki Method
- Algorithm for solving systems of ordinary differential equations.
- It includes the Picard method.
- Parker–Sochacki method - Wikipedia
1.8. Linear Inhomogeneous
1.8.1. Particular Function
- Find a particular function \(Y(x)\); then the solution is the sum of the particular solution and the complementary solution (homogeneous solution): \(y = \sum_i c_iy_i + Y\).
1.8.2. Method of Undetermined Coefficients
- Ansatz, Trial Solution
- The particular solution is guessed to be of the form similar to
the inhomogeneous term.
- namely, a combination of the inhomogeneous function and its derivatives.
1.8.2.1. Special Cases
- Trigonometric Function
- When the inhomogeneous term is a trigonometric function
\(a\cos(\omega t)\), use the trial solution:
- \[ c_1\cos(\omega t) + c_2\sin(\omega t) \]
- Or extend the equation to complex domain by replacing
\(a\cos(\omega t)\) with \(ae^{i\omega t}\), and use the
trial solutions:
- \[
ce^{i(\omega t - \phi)}
\]
- with \(c\in \mathbb{R}\)
- \[
ce^{i\omega t}
\]
- with \(c\in \mathbb{C}\)
- And solve for the undetermined constants.
- Especially when the differential equation is of the form
\(y'' + by' + cy = a\cos(\omega t)\),
- The extended solution is given by:
- \[
re^{i(\omega t - \phi)}
\]
- with
- \[ r = \frac{a}{\sqrt{(c-\omega^2)^2 + (b\omega)^2}} \]
- \[ \phi = \tan^{-1}\left(\frac{b\omega}{c-\omega^2}\right) \]
- Or by:
- \[ \tilde{c}e^{i\omega t} \]
- with
- \[ \tilde{c} = \frac{a}{c-\omega^2 + ib\omega} = re^{-i\phi} \]
1.8.3. Variation of Parameters
- Variation of Constants
- The homogeneous solutions with variation of parameters can be used as the ansatz for the particular solution: \[ y = \sum_i u_i(x) y_i(x). \]
1.8.3.1. Particular Solution
- Given an inhomogeneous linear differential equation: \[ \sum_{j=0}^n a_j(x)y^{(j)} = f(x) \] with \(a_n \equiv 1\),
The derivatives of \(y\) can be arbitrarily constrained with \((n-1)\) constraints:
\begin{align*} y &= \sum_{i=1}^n u_iy_i\\ y' &= \sum_{i=1}^n u_iy'_i, &\text{Constr. 1: } \sum_{i=1}^n u_i'y_i = 0\\ &\qquad\vdots\\ y^{(k)} &= \sum_{i=1}^n u_iy_i^{(k)}, &\text{Constr. $k$: } \sum_{i=1}^n u_i'y_i^{(k-1)} = 0\\ &\qquad\vdots\\ y^{(n)} &= \sum_{i=1}^n u_i'y_i^{(n-1)} + u_iy_i^{(n)}, &\text{No Constr.}\\ \end{align*}
- The last constraint is found by substituting these into the equation: \[ \sum_{i=1}^n u_i'y_i^{(n-1)} = f(x). \]
- All the constraints can be summarized using the Wronskian matrix \(\mathbf{W}\) of the homogeneous solutions: \[ \mathbf{W}\begin{bmatrix} u_1' \\ \vdots \\ u_n' \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ f(x) \end{bmatrix}. \]
- Therefore the particular solution can be found with: \[
y = \left({\Large \int} \mathbf{W}^{-1} \begin{bmatrix} 0 \\ \vdots \\ f(x) \end{bmatrix}\mathrm{d}x\right) \cdot \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}
\] here \(\mathbf{W}\) is guaranteed to be invertible, since
\(y_i\) are linearly independent.
- For the second order case, the particular solution is given by: \[ y = y_2\int\frac{y_1f}{W}\mathrm{d}x -y_1\int \frac{y_2f}{W}\mathrm{d}x. \]
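- A sympy sketch of the second order formula, assuming the illustrative equation \(y'' + y = \tan x\) with \(y_1 = \cos x\), \(y_2 = \sin x\), \(W = 1\):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2, f, W = sp.cos(x), sp.sin(x), sp.tan(x), 1

# y = y2 * Int(y1 f / W) - y1 * Int(y2 f / W)
yp = y2*sp.integrate(y1*f/W, x) - y1*sp.integrate(y2*f/W, x)
print(sp.simplify(yp.diff(x, 2) + yp - f))   # 0: yp is a particular solution
```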
1.9. Special Second Order
1.9.1. Sturm-Liouville Theory
- Theory of Sturm-Liouville problem: \[ \frac{\mathrm{d}}{\mathrm{d}x}\left[p(x)\frac{\mathrm{d}y}{\mathrm{d}x}\right] + q(x)y = -\lambda w(x)y \] with given functions \(p, q, w\) and boundary conditions.
- It solves the differential equation by finding the eigenvalues and eigenfunctions of the Hermitian operator, or self-adjoint operator, on the Hilbert space of functions satisfying the boundary conditions.
1.9.1.1. Regular Sturm-Liouville Problem
- The coefficient functions \(p, q, w\) and the derivative \(p'\) are all continuous on \([a, b]\)
- \(p(x) > 0\) and \(w(x) > 0\) for all \(x\in[a,b]\)
- The problem has separated boundary conditions of the form:
- \[ \alpha_1y(a) + \alpha_2y'(a) = 0 \] with \(\alpha_1, \alpha_2\) not both 0.
- \[ \beta_1y(b) + \beta_2y'(b) = 0 \] with \(\beta_1, \beta_2\) not both 0.
1.9.1.2. Time-Independent Schrödinger Equation
1.9.1.3. Chebyshev Differential Equations
- \[
(1-x^2)y'' - xy' + n^2y = 0
\]
- Chebyshev Equation, Chebyshev's Equation
1.9.1.3.1. Solutions
- The solutions can be obtained by power series:
- \[ F(x) = 1 - \frac{n^2}{2!}x^2 + \frac{(n-2)n^2(n+2)}{4!}x^4 - \frac{(n-4)(n-2)n^2(n+2)(n+4)}{6!}x^6+\cdots \]
- \[ G(x) = x - \frac{(n-1)(n+1)}{3!}x^3 + \frac{(n-3)(n-1)(n+1)(n+3)}{5!}x^5-\cdots \]
- They are the basis for the entire set of solutions that satisfy the recurrence relation: \[ a_{k+2} = \frac{(k-n)(k+n)}{(k+1)(k+2)}a_k. \]
- If \(n\) is a non-negative integer, one of the solutions
terminates.
- If \(n\) is even, \(F\) terminates, and if \(n\) is odd,
\(G\) terminates. When they terminate, they are related
to the Chebyshev polynomials of the first kind with:
- \[ T_n(x) = (-1)^{n/2}F(x) \quad (n \text{ even}), \]
- \[ T_n(x) = (-1)^{(n-1)/2}nG(x) \quad (n \text{ odd}). \]
- \[
(1-x^2)y'' - 3xy' + n(n+2)y = 0
\]
- Chebyshev polynomials of the second kind \(U_n(x)\) solve this.
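- A sketch of the termination claim, iterating the recurrence for the illustrative case \(n = 4\) with \(a_0 = 1\), \(a_1 = 0\); the surviving coefficients match \(T_4(x) = 8x^4 - 8x^2 + 1\) after the \((-1)^{n/2}\) sign:

```python
from fractions import Fraction

n = 4
a = [Fraction(1), Fraction(0)] + [Fraction(0)]*n
for k in range(n):                     # a_{k+2} = (k-n)(k+n)/((k+1)(k+2)) a_k
    a[k+2] = Fraction((k - n)*(k + n), (k + 1)*(k + 2)) * a[k]

print([(-1)**(n//2) * c for c in a])   # [1, 0, -8, 0, 8, 0]: T_4 coefficients
```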
1.9.1.4. Legendre's Differential Equation
- \[
(1-x^2)y''-2xy' + n(n+1)y = 0
\]
- Legendre polynomials \(P_n(x)\) solve this.
- Legendre functions of the second kind \(Q_n(x)\) also solve this.
- The Sturm-Liouville form is: \[ \frac{d}{dx}\left((1-x^2)\frac{d}{dx}\right)y = -\lambda y \] with eigenvalue \(\lambda = n(n+1)\).
1.9.1.5. Laguerre's Differential Equation
- \[
xy'' + (1-x)y' + ny = 0
\]
- Laguerre polynomials \(L_n(x)\) solve this.
- The Sturm-Liouville form is: \[ \left(xe^{-x}L_n(x)'\right)' = -ne^{-x}L_n(x). \]
1.9.1.6. (Physicist's) Hermite's Differential Equation
- \[ y'' - 2xy' + 2ny = 0 \]
- Physicist's Hermite polynomials \(H_n(x)\) solve this.
- The second solution is the confluent hypergeometric function of the first kind:
- \[ {}_1F_1(-\tfrac{n}{2};\tfrac{1}{2};x^2) \]
- The Sturm-Liouville form is: \[ \left(e^{-x^2}y'\right)' = -2ne^{-x^2}y. \]
1.9.1.7. Bessel's Differential Equation
- \[
x^2y''+xy' + (x^2-\alpha^2)y = 0
\]
- Solutions are the Bessel functions.
1.9.2. Riemann's Differential Equation
1.9.2.1. Definition
- \[ \frac{d^2w}{dz^2} + \left[\frac{1-\alpha - \alpha'}{z-a} + \frac{1-\beta - \beta'}{z-b} + \frac{1-\gamma -\gamma'}{z-c}\right]\frac{dw}{dz} + \left[\frac{\alpha\alpha'(a-b)(a-c)}{z-a} + \frac{\beta\beta'(b-c)(b-a)}{z-b} + \frac{\gamma\gamma'(c-a)(c-b)}{z-c}\right]\frac{w}{(z-a)(z-b)(z-c)} = 0 \]
1.9.2.2. Properties
- It has three regular singular points: \(a,b,c\).
1.10. Special Higher Order
1.10.1. Cauchy-Euler Equation
- Euler-Cauchy Equation, Euler's Equation, Equidimensional Equation
1.10.1.1. Equation
- Cauchy-Euler equation of order \(n\) is of the form: \[ \sum_{k=0}^n a_k x^k y^{(k)}(x) = 0 \]
1.10.1.2. Solutions
- For the second order case, the solution is of one of the forms:
- \[ \begin{cases} C_1x^{m_1} + C_2x^{m_2}, & \text{distinct roots}\\ (C_1\ln x + C_2)x^{m}, & \text{repeated roots}\\ C_1x^{\alpha}\cos(\beta\ln x) + C_2x^{\alpha}\sin(\beta\ln x), & \text{complex roots}. \end{cases} \]
1.10.1.3. Derivation
1.10.1.3.1. Change of Variables
- \(t = \ln x\), \(y(x) = \varphi(\ln x) = \varphi(t)\)
- The equation becomes a differential equation with constant coefficients.
- Note the solution for the repeated root case becomes \[ (C_1t+C_2)e^{r_1t} = (C_1\ln x + C_2)x^{r_1} \]
1.10.1.3.2. Operator Analysis
- The differential equation can always be transformed into the form: \[ ((xD)^n + \alpha_{n-1}(xD)^{n-1} + \cdots + \alpha_0)y\\ = \left[\prod_i((xD) - \lambda_i)\right]y = 0. \]
- Note that:
\[
(xD)^n = \sum_{k=1}^n \left\{n \atop k\right\} x^kD^k
\]
- where \(\left\{n \atop k\right\}\) is the Stirling number of the second kind.
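- A sympy sketch verifying the \((xD)^n\) identity for \(n = 3\) on a generic \(f\), using sympy's `stirling` (second kind):

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
f = sp.Function('f')(x)

def xD(expr, times):                 # apply (x d/dx) repeatedly
    for _ in range(times):
        expr = x*sp.diff(expr, x)
    return expr

n = 3
rhs = sum(stirling(n, k, kind=2)*x**k*sp.diff(f, x, k) for k in range(1, n+1))
print(sp.simplify(xD(f, n) - rhs))   # 0
```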
1.11. Nonlinear
- Non-linear, non-separable, or non-autonomous equations are almost always not solvable by analytic methods.
1.11.1. Method of Successive Approximations
- One kind of iterative method.
- The term can also refer to any iterative method.
- Known in Korean as 연차근사법.
- The first approximation is a trial solution that often resembles the solution to the linear portion of the equation.
- The error term can then be calculated and appended to the next iteration of the solution.
1.11.2. Perturbative Method
- Perturbation Theory
- Known in Korean as 섭동 이론 or 건드림 이론.
- Control the problematic term by multiplying it with a small parameter \( \varepsilon \).
- For nice equations, the unknowns can be written as a power series in terms of \( \varepsilon \).
1.11.3. Lambert W Function
- \[
x(1+y)y' - y = 0
\]
- The Lambert W function solves this: separation of variables gives \(ye^y = Cx\), hence \(y = W(Cx)\).
1.11.4. Integrating Factor
- Integrating factors become almost useless when the order is higher than 3.
1.11.4.1. Definition
- A function that facilitates solving a differential equation.
1.11.4.2. Examples
- First order linear inhomogeneous case
- \[
y'' = Ay^r
\]
- \(M = y'\)
- can be integrated with the factor of \( y' \) multiplied on both sides: \[ \frac{1}{2}(y')^2 = \frac{A}{r+1}y^{r+1} + B. \]
- \[
y'' + 2p(x) y' + (p(x)^2 + p'(x))y = q(x)
\]
- with \(2p(x)\) playing the role of \(p_1(x)\) below.
1.11.4.3. Conversion to Depressed Form
- If one wants to collapse the derivatives into \((M(x)y)^{(n)}\) by
multiplying both sides by \(M(x)\), the only possible integrating
factor is:
- \[ M(x) = \exp\left(\int \frac{p_1(x)}{n}\,dx\right) \]
- This restricts coefficients of any lower order derivative terms into a single function.
- The substitution \(y = \mu(x)v(x) = v(x)/M(x)\) removes the \(n-1\) order term.
1.12. First Order Coupled Nonlinear
- The overall behavior of the system \(\dot{x} = f(x, y)\), \(\dot{y} = g(x, y)\) can be predicted with the Jacobian
- \[ \mathbf{J} = \begin{bmatrix} f_x & f_y \\ g_x & g_y \end{bmatrix} \]
- Or the transpose of it.
- It tells how the velocity vectors change over the space.
- The system is at a stable point if the Jacobian is negative-definite at a fixed point (more generally, if every eigenvalue has negative real part there), and otherwise it is at an unstable point.
- It is a saddle point if the signs of the eigenvalues differ, and a divergent point if both eigenvalues are positive, as in the sketch below.
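- A numeric sketch of this classification, assuming the illustrative damped-oscillator field \(f = y\), \(g = -x - y/2\) with fixed point \((0,0)\):

```python
import numpy as np

# Jacobian [[f_x, f_y], [g_x, g_y]] of f = y, g = -x - y/2 at (0, 0)
J = np.array([[0.0,  1.0],
              [-1.0, -0.5]])
eig = np.linalg.eigvals(J)

if np.all(eig.real < 0):
    print('stable fixed point', eig)       # spiral sink: -0.25 +/- 0.97i
elif np.all(eig.real > 0):
    print('divergent fixed point', eig)
else:
    print('saddle (or marginal) point', eig)
```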
1.13. Properties
1.13.1. Linearity
- The differential equation \[ Ly = f \] is linear if \(L\) is linear in \(y\).
- The set of solutions forms a flat hypersurface in the function
space.
- Therefore the general solution is of the form \[ y = \sum_i C_iy_i \] (plus a particular solution when \(f \neq 0\)).
1.13.1.1. Nonlinear
- The set of solutions forms a curved hypersurface in the function
space.
- The general solution is not a linear combination of linearly independent solutions.
1.14. Singularity
- Singularities are used to classify differential equations, or to test for the existence of the series solution.
1.14.1. Regular Singularity
- Generally, for a monic differential equation: \[ f^{(n)}(z) + \sum_{i=0}^{n-1}p_i(z)f^{(i)}(z) = 0, \] the point \(a\) is a regular singular point if each \(p_{n-i}(z)\) has a pole of order at most \(i\) at \(a\).
- For a linear homogeneous second-order ordinary differential equation: \[ y'' + P(x)y' + Q(x)y = 0, \] \(P(x)\) or \(Q(x)\) diverges at \(a\), but \(P(x)\) no faster than a pole of order 1 and \(Q(x)\) no faster than a pole of order 2; equivalently, \((x-a)P(x)\) and \((x-a)^2Q(x)\) are analytic at \(a\).
1.14.2. Irregular Singularity
- Essential Singularity
- At least one of the coefficients diverges faster than the regular counterpart.
1.14.3. Singularity at Infinity
- The equation has a singularity at infinity if the Möbius transformation \(x = 1/w\) yields an equation with a singularity at 0.
- \[ \frac{d}{dx} \leadsto -w^2\frac{d}{dw}, \quad -x^2\frac{d}{dx} \leadsto \frac{d}{dw} \]
- \[ x^2\frac{d^2}{dx^2}\leadsto 2w\frac{d}{dw} + w^2\frac{d^2}{dw^2}, \quad \frac{d^2}{dx^2}\leadsto 2w^3\frac{d}{dw} + w^4\frac{d^2}{dw^2} \]
2. Partial Differential Equations
2.1. First Order Quasilinear
2.1.1. Lagrange's Linear Equation
2.1.1.1. Equation
- \[ Pp + Qq = R \] where \(P, Q, R\) are functions of \(x, y, z\), and \(p = z_x, q = z_y\).
2.1.1.2. Solution
- Assume the solution has the form \(\phi(u, v) = 0\), with \(u, v\) being the functions of \(x, y, z\).
- Observe that taking partial derivative of \(\phi\) with respect to \(x\) and \(y\) yields the differential equation.
- Note that \(z\) is regarded as a function of \(x\) and \(y\),
since the original differential equation describes it as such.
- For nonzero derivatives of \(\phi\), the determinant of the
resulting linear system in \((\phi_u, \phi_v)\) must vanish,
yielding the differential equation: \[
(u_yv_z - u_zv_y)p + (u_zv_x - u_xv_z)q = (u_xv_y - u_yv_x) \iff Pp + Qq = R.
\]
- Hence \(u\) and \(v\) must satisfy this relation.
- Set \(u = a\) and \(v = b\) as constants, and examine \(x, y, z\)
that are on the level sets of \(u\) and \(v\).
- This is motivated by the implicit relation \(\phi(u, v) = 0\), which makes \(u\) and \(v\) functionally dependent.
Take the differential of \(u\) and \(v\):
\begin{cases} du = u_xdx + u_y dy + u_z dz = 0,\\ dv = v_xdx + v_y dy + v_z dz = 0. \end{cases}
- From this linear system the ratio relation can be obtained: \[ \frac{dx}{u_yv_z - u_zv_y} = \frac{dy}{u_zv_x - u_xv_z} = \frac{dz}{u_xv_y - u_yv_x}. \]
2.1.1.3. Auxiliary Equation
- Lagrange's System of Ordinary Differential Equations, Auxiliary Equations, Subsidiary Equations, Lagrange-Charpit Equations.
- In other words, \[ \frac{dx}{P} = \frac{dy}{Q} = \frac{dz}{R}. \]
2.1.1.4. Integrate the auxiliary equations, and set the integrals to u and v.
The solution \(\phi(u, v) = 0\) with arbitrary function \(\phi\) is now obtained.
- If the auxiliary equations are not directly integrable, try their mediants.
- Reduction of the degree of freedom is possible: \(dx + dy = dt\).
2.1.2. Method of Characteristics
2.1.2.1. Fully First Order Linear Homogeneous With Constant Coefficients
- \[ \sum_{i=1}^na_iu_{x_i} = 0 \]
- The coordinate in the direction of the characteristic curves is
\(s = \sum a_ix_i\), since the gradient \(\nabla u\) is orthogonal
to the coefficients vector and \(s\) is in the direction of the
coefficients.
- After change of variables, the equation transforms into \[ \left(\sum_{i=1}^na_i^2\right)\hat{u}_s = 0 \]
- The other parameter can be determined, either by
inspection, or from the equation of the characteristic
curves obtained from the auxiliary equation
- \[ \frac{dx_i}{a_i} = \frac{dx_j}{a_j} \implies \frac{x_i}{a_i} - \frac{x_j}{a_j} = C \]
- Since \(C\) can be anything, one can take it (or scalar multiple of it) to be the first independent variable \(t_1\) that is orthogonal to the characteristic curves.
- Then each next \(t_k\) can be selected to be orthogonal to every previous one; see the sketch below.
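- A sympy sketch for the constant-coefficient case \(3u_x + 2u_y = 0\) (an illustrative choice); `pdsolve` returns an arbitrary function that is constant along the characteristics \(2x - 3y = C\):

```python
import sympy as sp
from sympy.solvers.pde import pdsolve

x, y = sp.symbols('x y')
u = sp.Function('u')

eq = sp.Eq(3*u(x, y).diff(x) + 2*u(x, y).diff(y), 0)
print(pdsolve(eq))   # u(x, y) = F(2*x - 3*y)
```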
2.1.2.2. First Order Linear Inhomogeneous With Semi-Constant Coefficients
- \[ \sum_{i=1}^n a_iu_{x_i} + q(x_1,\dots, x_n)u = f(x_1, \dots, x_n) \]
- Applying the change of variable \(s = \sum a_ix_i, t_n \text{ orthogonal}\) transforms the equation into an ordinary one in \(s\).
- \[ \left(\sum_{i=1}^na_i^2\right)\hat{u}_s + \hat{q}\hat{u} = \hat{f} \]
- which then can be solved as first order linear inhomogeneous ordinary differential equation.
2.1.2.3. First Order Quasilinear
- The surface of solutions \(x_{n+1} = u\) of the differential equation \[ \sum_{i=1}^na_i(x_1,\dots, x_{n}, u) u_{x_i} = a_{n+1}(x_1, \dots, x_n, u) \] has the normal vector \[ \begin{bmatrix} u_{x_1} \\ \vdots \\ u_{x_n} \\ -1 \end{bmatrix} \]
- since this is the gradient of the equation of the surface \(F(x_1, \dots, x_{n+1}) = u - x_{n+1} = 0\).
- Therefore, the vector field \[ \begin{bmatrix} a_1 \\ \vdots \\ a_n \\ a_{n+1} \end{bmatrix} \] is always tangent to the surface \(F = 0\),
- and by computing the integral curves \[ \frac{dx_i}{dt} = a_i \] the solution \(u = x_{n+1}(x_1, \dots, x_n)\) is obtained.
2.2. Second Order Linear
- The operator can be factored into a product of first order differential operators.
2.2.1. Elliptic
2.2.1.1. Laplace's Equation
- Laplace's Differential Equation
2.2.1.1.1. Definition
- \[ \nabla^2 f = 0 \] where \(\nabla^2\) is the Laplacian.
2.2.1.2. Helmholtz Equation
- \[ \nabla^{2} F + k^2 F = 0 \]
2.2.2. Hyperbolic
- The strong maximum principle does not apply. Hence, the solution of the Dirichlet problem is not unique.
- Cauchy boundary condition needs to be given.
2.2.3. Parabolic
2.2.4. Separable in Arguments
2.2.4.1. Separation of Variables
- Make an ansatz that the solution is of the form: \[ \phi(x_1, x_2,\dots, x_n) = \prod_{i=1}^{n}\phi_i(x_i). \]
- So that the equation transforms into the form: \[ F_i(x_i, \phi_i, \phi_i', \dots) = F_j(x_j,\phi_j, \phi_j',\dots) \]
- And then using the independence of the variables, obtain \(n\) ordinary differential equations: \[ F_i(x_i, \phi_i, \phi_i',\dots) = C. \]
- It can be used when the differential equation is linear, and the boundary conditions are separable.
- It might not work for a differential equation with non-constant coefficients. (Chat GPT)
- It is applicable if the equation has the form: \[ Tu = Su \] where \( T \) and \( S \) are compact, self-adjoint differential operators in different variables.
2.3. Properties
2.3.1. Linearity
2.3.1.1. Linear
- Linear in the dependent variable and its derivatives with the coefficients depending only on the independent variables.
- e.g.
- \[ a_1(x,y)\frac{\partial^2u}{\partial x^2} + a_2(x,y)\frac{\partial^2u}{\partial x\partial y} + a_3(x,y)\frac{\partial^2u}{\partial y^2} + a_4(x,y)\frac{\partial u}{\partial x} + a_5(x,y)\frac{\partial u}{\partial y} + a_6(x,y)= 0 \]
2.3.1.2. Semilinear
- Linear only in the highest order derivatives with the coefficients depending only on the independent variables.
- e.g.
- \[ a_1(x,y)\frac{\partial^2u}{\partial x^2} + a_2(x,y)\frac{\partial^2u}{\partial y^2} + f(x,y,u,u_x, u_y)= 0 \]
2.3.1.3. Quasilinear
- Linear in the highest order derivatives with the coefficients possibly depending on the dependent variables and its non-highest order derivatives.
- e.g.
- \[ a_1(x,y,u,u_x, u_y)\frac{\partial^2u}{\partial x^2} + a_2(x,y,u,u_x,u_y)\frac{\partial^2u}{\partial y^2} + f(x,y,u,u_x, u_y)= 0 \]
2.3.1.4. Fully Nonlinear
- One or more of the highest order derivatives is nonlinear.
- e.g. the Monge–Ampère equation
2.3.2. Well-Posedness
- Schematic information about a partial differential equation, usually
given in the form of theorems or assumptions:
- Existence of the solution
- Uniqueness of the solution to an initial value problem
- Continuous dependence of the solution on the data and free constants.
3. Green's Function
3.1. Definition
- Given a linear differential operator \( \mathrm{L} \), its Green's function \( G \) satisfies:
- \[ \mathrm{L}G(x, \xi)=\delta(x-\xi). \]
- where \( \delta \) is the Dirac delta function.
3.2. Derivation
- \begin{align*} &f(\xi)\mathrm{L}G(x, \xi) = f(\xi)\delta(x-\xi)\\ \implies &\mathrm{L}f(\xi)G(x,\xi) = f(\xi)\delta(x-\xi)\\ \implies &\int_\Omega \mathrm{L}f(\xi)G(x,\xi)\,d\xi = \int_\Omega f(\xi)\delta(x-\xi)\,d\xi\\ \implies & \mathrm{L}\int_\Omega f(\xi)G(x,\xi)\,d\xi = \int_\Omega f(\xi)\delta(x-\xi)\,d\xi = f(x).\\ \end{align*}
- Think of the Green's function as the potential created by a point
source of infinite density. The density at a point is
\(\mathrm{L}\) applied to the whole potential, which is the
superposition of the potentials of all points.
The Green's function encodes the contribution of the point \(\xi\) to the value at \(x\):
\[ \mathrm{L}y(x)=f(x)\implies y(x)=\int_\Omega f(\xi)G(x, \xi)\,d\xi \]
3.3. Properties
- The Green's function is the impulse response.
- When \(\mathrm{L}\) is translation invariant, \(G(x,\xi) = G(x-\xi)\), and the solution to the differential equation is the convolution of the inhomogeneous term with the Green's function.
3.4. Examples
- For the 3D Laplace operator:
\[ G(\mathbf{r}, \boldsymbol{\xi}) = -\frac{1}{4\pi |\mathbf{r} - \boldsymbol{\xi}| } \]
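- A numeric sketch of the idea on a grid: for the 1D Dirichlet Laplacian, the inverse of the finite-difference matrix plays the role of the Green's function, its columns being the impulse responses (the grid size and right-hand side are illustrative choices).

```python
import numpy as np

n, h = 99, 1.0/100                        # interior grid of (0, 1)
L = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2  # discrete d^2/dx^2, Dirichlet
G = np.linalg.inv(L)                      # discrete Green's function

xs = np.linspace(h, 1 - h, n)
f = np.sin(np.pi*xs)                      # u'' = f  =>  u = -sin(pi x)/pi^2
u = G @ f                                 # superposition of impulse responses
print(np.abs(u + np.sin(np.pi*xs)/np.pi**2).max())  # small discretization error
```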
4. Laplace Transformation
5. Hamiltonian Flow
- A state subject to a conservative force evolves according to
Hamilton's equations.
- The flow of states is along the level sets of the Hamiltonian.
6. Matrix Differential Equation
- \[ \dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t) \]
7. Wronskian
- It answers whether the coefficients for the individual solutions exist, given an initial condition.
7.1. Definition
- The determinant of the matrix of the functions and their successive derivatives
- \[ W(f_1, \ldots, f_n) (x)= \det \begin{bmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n' (x)\\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x)& f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{bmatrix}. \]
7.2. Properties
7.2.1. Linear Independence
- If the \(f_i\) are linearly dependent, the Wronskian is identically equal to 0. (The converse holds when the \(f_i\) are solutions of a common linear homogeneous ODE, but not for arbitrary smooth functions.)
7.3. Abel's Identity
- Abel's Formula, Abel's Differential Equation Identity
- Originally, for the second order linear homogeneous ordinary differential equation.
7.3.1. Statement
- For \(n\)th order linear homogeneous differential equation \[ y^{(n)} + p_{n-1}(x)\,y^{(n-1)} + \cdots + p_1(x)\,y' + p_0(x)\,y = 0, \]
- The Wronskian satisfies \[ W'(x) = -p_{n-1}(x)W(x), \]
- thus, it is given by: \[ W(x) = W(x_0)\exp\left(-\int_{x_0}^x p_{n-1}(t)\,\mathrm{d}t\right). \]
- This is a special case of Liouville's formula.
7.3.2. Proof
7.3.2.1. Direct Proof
7.3.2.2. Using Liouville's Formula
- Notice that the Wronskian matrix satisfies: \[ \mathbf{W}' = \begin{bmatrix}0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\\ -p_0(x)&-p_1(x)&-p_2(x)&\cdots&-p_{n-1}(x)\end{bmatrix} \mathbf{W} \]
- By Liouville's formula, \[ \det(\mathbf{W})' = -p_{n-1}(x)\det{\mathbf{W}} \]
8. Liouville's Formula
8.1. Statement
- For the fundamental matrix \(\boldsymbol{\Phi}\) of the matrix differential equation on an interval \(I\): \[ \mathbf{y}' = \mathbf{A}(t)\mathbf{y} \]
- The determinant of \(\boldsymbol{\Phi}\) satisfies: \[ (\det\boldsymbol{\Phi})' = \mathop{\rm tr}\mathbf{A} \det{\boldsymbol{\Phi}}. \]
- Therefore, explicitly \[ \det\boldsymbol{\Phi} = \det\boldsymbol{\Phi}_0 \exp\left(\int_{t_0}^t\mathop{\rm tr}\mathbf{A}\,du\right) \] on the interval \(I\).
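- A numeric sketch of the formula, assuming an illustrative \(\mathbf{A}(t)\): integrate \(\boldsymbol{\Phi}' = \mathbf{A}(t)\boldsymbol{\Phi}\) from the identity and compare \(\det\boldsymbol{\Phi}\) with the exponential of the integrated trace.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

def A(t):                                  # illustrative coefficient matrix
    return np.array([[np.sin(t), 1.0],
                     [0.0,       t  ]])

def rhs(t, phi):                           # Phi' = A(t) Phi, flattened
    return (A(t) @ phi.reshape(2, 2)).ravel()

t0, t1 = 0.0, 1.5
sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
det_phi = np.linalg.det(sol.y[:, -1].reshape(2, 2))
tr_int = quad(lambda t: np.trace(A(t)), t0, t1)[0]
print(det_phi, np.exp(tr_int))             # the two values agree
```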
8.2. Proof
- It is readily proved with the derivative formula obtained from the Leibniz formula for determinants
\[
(\det\mathbf{\Phi})' = \sum_{i=1}^n \det\begin{pmatrix}
\Phi_{1,1}&\Phi_{1,2}&\cdots&\Phi_{1,n}\\
\vdots&\vdots&&\vdots\\
\Phi'_{i,1}&\Phi'_{i,2}&\cdots&\Phi'_{i,n}\\
\vdots&\vdots&&\vdots\\
\Phi_{n,1}&\Phi_{n,2}&\cdots&\Phi_{n,n}
\end{pmatrix}.
\]
- Since row \(i\) of \(\boldsymbol{\Phi}'\) is \(\sum_j a_{i,j}\) times row \(j\) of \(\boldsymbol{\Phi}\), and adding multiples of other rows leaves the determinant unchanged, \[ \det\begin{pmatrix} \Phi_{1,1}&\Phi_{1,2}&\cdots&\Phi_{1,n}\\ \vdots&\vdots&&\vdots\\ \Phi'_{i,1}&\Phi'_{i,2}&\cdots&\Phi'_{i,n}\\ \vdots&\vdots&&\vdots\\ \Phi_{n,1}&\Phi_{n,2}&\cdots&\Phi_{n,n} \end{pmatrix} = a_{i,i}\det\boldsymbol{\Phi}. \]
9. Boundary Conditions
9.1. Initial Value Problem
- IVP
- Initial values are required
9.2. Boundary Value Problem
- BVP
- Boundary values are required
9.3. Initial Boundary Value Problem
- IBVP
- Both are required
9.4. Dirichlet Problem
9.4.1. Dirichlet Boundary Condition
- The value at the boundary
9.4.2. Problem
- Given a boundary condition \(f\) on a sufficiently smooth boundary, called the Dirichlet boundary condition, is there a unique sufficiently differentiable function \(u\) in the interior satisfying the given differential equation?
- Originally, the problem was to find the harmonic functions satisfying the boundary condition.
9.4.3. Existence
9.4.3.1. For Harmonic Functions
- The solution exists if
- the boundary is sufficiently smooth
- the boundary condition \(f\) is Hölder continuous \(C^{1,\alpha}\) with \(\alpha \in (0, 1)\).
9.4.4. Uniqueness
9.4.4.1. Maximum Principle
- For a second order partial differential equation,
- the maximum is attained on the boundary of any precompact subset of the domain.
- At the maxima, the function \(u\) has to satisfy
- \(du = 0\)
- \(\nabla^{\cdot 2}u \le 0\) in the Loewner order, i.e. the Hessian is negative semidefinite
9.4.4.2. For Linear Elliptic PDE
- Applying the maximum principle to an elliptic PDE, one whose coefficient matrix has positive eigenvalues at each point, the second derivatives must all be zero at the hypothetical interior maximum.
- If there exist two distinct solutions to a linear Dirichlet problem, then their difference is a solution to the homogeneous equation. The difference violates the maximum principle, since it is zero on the boundary and nonzero in the interior.
9.4.4.2.1. Strong Maximum Principle
- The solution of a linear elliptic partial differential equation does not attain maxima in the interior of the domain unless it is a constant function.
9.5. Cauchy Problem
9.5.1. Cauchy Boundary Condition
- Dirichlet boundary condition, the value at the boundary, and the Neumann boundary condition, the normal derivative on the boundary.
9.5.2. Problem
On a smooth manifold \(S\subset \mathbb{R}^{n+1}\) of dimension \(n\), called the Cauchy surface, find the functions \(u_1, \dots, u_m\) of the independent variables \(t, x_1,\dots, x_n\) that satisfy: \[ \frac{\partial^{n_i}u_i}{\partial t^{n_i}} = F_i\left(t,x_1,\dots,x_n,u_1,\dots,u_m,\dots,\frac{\partial^k u_j}{\partial t^{k_0}\partial x_1^{k_1}\dots\partial x_n^{k_n}},\dots\right) \] for \( i,j = 1,2,\dots,m\), \( k_0+k_1+\dots+k_n=k\leq n_j \), and \( k_0 < n_j \),
subject to the condition that for some value \(t = t_0\): \[ \frac{\partial^k u_i}{\partial t^k}=\phi_i^{(k)}(x_1,\dots,x_n) \quad \text{for } k=0,1,2,\dots,n_i-1 \] where \(\phi_i^{(k)}(x_1,\dots,x_n)\) are given functions defined on the surface \(S\), collectively known as the Cauchy data of the problem.
10. Existence and Uniqueness
10.1. Picard-Lindelöf Theorem
- Picard's Existence Theorem, Cauchy-Lipschitz Theorem, Existence and Uniqueness Theorem
- First Order Ordinary matrix differential equation
10.1.1. Statement
- For a closed rectangle \(D\subseteq \mathbb{R}\times \mathbb{R}^n\) with \((t_0, y_0)\in \operatorname{int}D\),
- if a function \(f\colon D\to \mathbb{R}^n\) is continuous in \(t\) and Lipschitz continuous in \(y\),
- then there exists \(\varepsilon > 0\) such that the initial value problem \(y'(t) = f(t, y(t)),\ y(t_0) = y_0\) has a unique solution \(y(t)\) on the interval \([t_0 - \varepsilon, t_0+\varepsilon]\).
10.1.2. Proof
- The proof uses the Banach fixed point theorem on the Picard Iteration.
10.1.3. Picard Iteration
- Picard's Method, Picard Iterative Process
- Notice:
- \[ y'(t) = f(t,y(t)) \implies y(t) - y(t_0) = \int_{t_0}^t f(s,y(s))\,ds. \]
- Set \(\varphi_0(t) = y_0\) and iterate:
- \[ \varphi_{k+1}(t) = y_0 + \int_{t_0}^tf(s,\varphi_k(s))\,ds \]
- By the Banach fixed point theorem, \(\varphi_k\) is convergent, and solves the differential equation.
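- A sympy sketch of the iteration for the illustrative problem \(y' = y\), \(y(0) = 1\); the iterates are the Taylor partial sums of \(e^t\):

```python
import sympy as sp

t, s = sp.symbols('t s')
phi = sp.Integer(1)                        # phi_0 = y_0 = 1
for _ in range(5):                         # phi_{k+1} = 1 + int_0^t phi_k(s) ds
    phi = 1 + sp.integrate(phi.subs(t, s), (s, 0, t))
print(sp.expand(phi))                      # 1 + t + t**2/2 + ... + t**5/120
```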
10.2. Peano Existence Theorem
- Peano Theorem, Cauchy-Peano Theorem
- First Order Ordinary Differential Equation
10.2.1. Statement
- For a continuous function \(f\colon D \to \mathbb{R}\) on an open subset \(D\subset\mathbb{R}\times\mathbb{R}\), the initial value problem \(y(x_0) = y_0\) for the differential equation \(y' = f(x,y)\) has a local solution \(y\colon I\to \mathbb{R}\) where \(I\) is a neighborhood of \(x_0\).
- The solution may not be unique.
10.3. Carathéodory's Existence Theorem
- Generalization of Peano's existence theorem
- First Order Ordinary Differential Equation
10.3.1. Statement
- \(f(x, y)\) on a rectangular domain is measurable in \(x\) and continuous in \(y\), and there exists a Lebesgue-integrable function \(0 \le m(x) < \infty\) such that \(|f(x,y)| \le m(x)\) for all \((x,y)\),
- Then the differential equation has a local solution in the extended sense.
10.4. Cauchy-Kovalevskaya Theorem
- Cauchy-Kowalevski Theorem
- First Order Matrix Partial Differential Equation
10.4.1. Statement
- For analytic matrix-valued functions \(\mathbf{A}_i\), the quasilinear Cauchy problem: \[ \partial_{x_n}\mathbf{y} = \sum_{i=1}^{n-1}\mathbf{A}_i\partial_{x_i}\mathbf{y} \]
- with boundary condition \(\mathbf{y}(\mathbf{x}) = \mathbf{0}\) on the hypersurface \(x_n = 0\), has a unique local solution.
10.5. Holmgren's Uniqueness Theorem
- Conormal Bundle
- \[ N^*\Sigma := \{(x,\xi) \in T^*\mathbb{R}^n : x\in \Sigma, \xi\big|_{T_x\Sigma} = 0\}. \]
- Elliptic Regularity
- \(\operatorname{Char} P\), the characteristic set of the operator \(P\)
- Sobolev space
10.6. Malgrange-Ehrenpreis Theorem
10.6.1. Statement
- Every non-zero differential operator with constant coefficients has a Green's function. Therefore, the partial differential equation with constant coefficients admits a solution: \[ P\left(\partial_{x_1}, \partial_{x_2}, \dots, \partial_{x_n}\right)u = f. \]
- Related to the Hahn-Banach theorem
10.7. Lewy's Example
- A linear partial differential equation with polynomial coefficients may not have a solution.
10.7.1. Example
- There exists a smooth function \(F\colon \mathbb{R}\times \mathbb{C}\to \mathbb{C}\) such that the differential equation: \[ \frac{\partial u}{\partial \overline{z}} - iz\frac{\partial u}{\partial t} = F(t, z) \] admits no solution.
- Related to the Cauchy-Riemann manifold.
11. Stochastic Differential Equation
- SDE
- Differential Equation with stochastic process
- \[ dX_t = \mu(X_t, t)\,dt + \sigma(X_t, t)\,dW_t. \]
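- A numeric sketch via the Euler-Maruyama scheme (the drift \(\mu = -X\) and diffusion \(\sigma = 0.3\), an Ornstein-Uhlenbeck process, are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 5.0, 1000
dt = T / n
X = np.empty(n + 1)
X[0] = 1.0
for k in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))        # Brownian increment ~ N(0, dt)
    X[k+1] = X[k] + (-X[k])*dt + 0.3*dW      # mu(X,t) dt + sigma(X,t) dW
print(X[-1])
```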
11.1. Fokker-Planck Equation
The probability distribution over the possible values the stochastic process can take follows the Fokker-Planck equation: \[ \frac{\partial u}{\partial t} = - \frac{\partial }{\partial x} (\mu u) + \frac{\partial^2}{ \partial x^2} \left( \frac{\sigma^2}{2}u \right). \]
12. External Links
- VisualPDE
- Software for solving complicated partial differential equations does not exist.
13. References
- Separation of variables - Wikipedia
- Clairaut's equation - Wikipedia
- Lagrange equation - Encyclopedia of Mathematics
- d'Alembert's equation - Wikipedia
- Chrystal's equation - Wikipedia
- Bernoulli differential equation - Wikipedia
- Autonomous system (mathematics) - Wikipedia
- Method of characteristics - Wikipedia
- https://en.wikipedia.org/wiki/Green%27s_function
- https://en.wikipedia.org/wiki/Linear_differential_equation
- Differential Equations. All Basics for Physicists. - YouTube
- Reduction of order - Wikipedia
- Chebyshev equation - Wikipedia
- Jacobi Elliptic Function Differential Equations - YouTube
- Power series solution of differential equations - Wikipedia
- Sturm–Liouville theory - Wikipedia
- Arfken et al. Mathematical Methods for Physicists. Chap. 8.
- Riemann's differential equation - Wikipedia
- Cauchy–Euler equation - Wikipedia
- Fowles. Classical Mechanics. 3.7 The Nonlinear Oscillator: Method of Successive Approximations.
- Perturbation theory - Wikipedia
- Integrating factor - Wikipedia
- 수리물리학 13-2 라그랑지 일계 편미분방정식 1 (Mathematical Physics 13-2: Lagrange's First-Order Partial Differential Equations 1) - YouTube
- Helmholtz equation - Wikipedia
- Frobenius method - Wikipedia
- Regular singular point - Wikipedia
- https://en.wikipedia.org/wiki/Matrix_differential_equation
- Linearizing Nonlinear Differential Equations Near a Fixed Point - YouTube
- Numerical methods for partial differential equations - Wikipedia
- Wronskian - Wikipedia
- Abel's identity - Wikipedia
- Liouville's formula - Wikipedia
- Cauchy problem - Wikipedia
- Dirichlet problem - Wikipedia
- Maximum principle - Wikipedia
- Peano existence theorem - Wikipedia
- Picard–Lindelöf theorem - Wikipedia
- Stochastic differential equation - Wikipedia